Migration using SAP HANA db as the database

About Using SAP HANA as the database

When the project uses SAP HANA DB as the migration database, data transformation takes place within the SAP HANA database. Syniti Migrate itself, along with the Syniti Construct tables, remains stored within a SQL Server database. The following notes are important when using SAP HANA DB as the migration working database:

  • Each database will have a schema of the same name (for example, TGTECC.TGTECC.Tablename).

  • Within HANA DB, there is not a separate database for each object. A single database holds all of the objects, and the schema takes the place of the database in object naming.

  • HANA DB allows sentence-structure (camel case) naming for tables and columns and is case sensitive, so zActive is not the same as ZACTIVE. SAP HANA BTP (cloud) instances are case sensitive and must use all caps. Syniti methodology requires all caps in naming.

  • Use of NULL in HANA DB is scripted with IFNULL, for example: WHERE IFNULL(T.LIFNR, ' ') = ' '

  • The cloud version of HANA (BTP) does not support the "UPDATE FROM" syntax that the on-premise version of HANA DB accepts. Instead, users must use "MERGE INTO" syntax when building update rules, as in the sketch below.
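As a minimal, hypothetical sketch of this difference (the table and column names below are illustrative only, not from a delivered rule; FBDK_VEN_S is an assumed source staging table), an update rule that would use UPDATE ... FROM on-premise can be expressed for HANA BTP as:

-- On-premise HANA form (rejected by HANA BTP):
-- UPDATE WRKHANA.FBDK_VEN_T T SET VENDOR = S.VENDOR FROM WRKHANA.FBDK_VEN_T T, WRKHANA.FBDK_VEN_S S WHERE T.ZLEGACYVENDORID = S.ZLEGACYVENDORID;
-- Equivalent MERGE INTO form for HANA BTP:
MERGE INTO WRKHANA.FBDK_VEN_T T
USING WRKHANA.FBDK_VEN_S S
ON T.ZLEGACYVENDORID = S.ZLEGACYVENDORID
WHEN MATCHED THEN UPDATE SET VENDOR = S.VENDOR;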

Preparation for using SAP HANA Database

HANA Drivers setup for Migration

During the setup of Syniti Migrate on the SQL Server system, the Admin installs the HANA drivers that match the version of the Syniti Migrate install. These drivers are stored in the Syniti 3rd Party Install Files folder for each version of Syniti Migrate. For SAP HANA BTP, the latest version of the HANA drivers is recommended because it supports the BTP connection format.

Note

The user does not need to set up an ODBC Connection to HANA on the server; only the ODBC drivers need to be installed. The connection setup is done within Syniti Migrate Administer > Setup > Servers or by following the Guided Setup, which provides sequenced steps for preparing a new MIGRATE system using a HANA db.

The Admin will set up SAP HANA DB as a Migration Working Database Server within Administer > Setup > Guided Setup during the initial weeks on site. Refer to section Initial Setup - First Week for details of the initial system setup.


Datasource Server setup as HANA db

SRCCONSTRUCT Build in HANA DB

This section follows the same steps as the SQL Server db process, except that the Schemas are named the same as the databases. Create the CONSTRUCT Datasource as a Snapshot Source Datasource (SRCCONSTRUCT) within HANA DB. This new Snapshot datasource stores the CONSTRUCT table data for use during Mock runs, since the table data remains static. Each Construct table built within Syniti Construct and stored within the SQL Server CONSTRUCT database is added to the SRCCONSTRUCT datasource within Snapshot Management for HANA DB, and the Metadata is then imported.


Datasources: Source Snapshot of CONSTRUCT db: Add Tables

Note

Along with storing all Construct pages as tables within the SRCCONSTRUCT database in HANA DB, the tables XTVALUEMAP and LIST_TARGET_VALUE are stored in SRCCONSTRUCT db as Snapshot Datasource tables to provide access for Value Mapping work. XREF ETL Logic uses the values in the snapshot table of XTVALUEMAP for translations. The ETL tasks INSERT into MIGRATE.dbo.XTVALUEMAP and update target values from SRCCONSTRUCT.dbo.XTVALUEMAP. The LIST_TARGET_VALUE table provides the Target Value, Load Value, and Description of each record by Datasource and Check Table. 
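As an illustrative sketch only, the kind of translation lookup the XREF ETL logic performs against the snapshot copy of XTVALUEMAP is shown below. The column names CHECKTABLE, SOURCEVALUE, and TARGETVALUE are assumptions for the sketch, not the delivered layout; the staging table is taken from the source example later in this article.

-- Hypothetical XREF translation lookup against the SRCCONSTRUCT snapshot of XTVALUEMAP;
-- column names on XTVALUEMAP are assumed for illustration.
SELECT S."zREGIO" AS SOURCEVALUE, VM.TARGETVALUE
FROM WRKHANA.CMDK_KNA1_ADV_ADV_S S
LEFT JOIN SRCCONSTRUCT.XTVALUEMAP VM
  ON VM.CHECKTABLE = 'T005S' AND VM.SOURCEVALUE = S."zREGIO";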

This Source Snapshot is connected to the CONSTRUCT datasource that comes delivered and points to the Syniti Migrate system CONSTRUCT database in SQL Server. As each table is added to the Source Snapshot datasource, the system attempts to import the Metadata (Columns, etc.).


Datasources: Source Snapshot Datasource SRCCONSTRUCT: Import Table Metadata

As with any Snapshot Datasource, the SRCCONSTRUCT datasource is built out for XML. The new tables are automatically generated within HANA DB, and the Import Project and all tables are scripted in XML for import to DataServices.

Report Schema in HANA DB

The REPORT database is stored within SAP HANA DB, and the migration of data generates the report tables and stores them within this REPORT db on SAP HANA DB. The Parameters page designates the appropriate datasource for reports. As the object executes the job for conversion, reports are generated as tables within this HANA DB schema (REPORT) for the static report data.

Note

The SAP HANA Database images shown in this article are accessed through DBeaver software.


HANA DB: Schema REPORT: List generated Reports

Parameters

Access the Parameters page with the pathway Administer > Setup > Parameters from the Syniti Migrate Home page, or by following the Guided Setup, which provides sequenced steps for preparing a new MIGRATE system using a HANA db.

The Admin will set up the Parameters within Administer > Setup > Guided Setup during the initial weeks on site. Refer to section Initial Setup - First Week for details of the initial system setup. Refer to section Administer > Setup > Parameters for details of this section's setup and use.


Parameters

Datasources in SAP HANA DB

The Admin will create the Application and Instance for SAP HANA DB as "Syniti Migration" within Datasources to point to this database system. Each system source, working database, and system target will be set up to point to the client site HANA server.

Within HANA DB, the schemas have the same names as the databases. Within the user Login, the Catalog section lists each SCHEMA (where each Source and Target system is identified, and where tables and views are stored).
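For reference, the schemas visible in the Catalog can also be listed directly with a query against the HANA system views (a simple sketch; filter as needed):

-- List all schemas in the HANA database; each migration system appears as a schema.
SELECT SCHEMA_NAME FROM SYS.SCHEMAS ORDER BY SCHEMA_NAME;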


HANA DB: List of Systems, Schema, and Tables

Once the Snapshot Datasource (Target or Source) is built within Syniti Migrate and linked to the System Datasource, the user may BUILD XML for the import process.

Note

The assumption is that the Source or Target Datasource is set up with active connections, and that the tables and fields are imported to these Datasources in advance of building the snapshot datasource XML.


Snapshot Datasource Details: Tables tab

Note

If the SCHEMA is not properly defined, the tables are not generated in HANA DB and the XML script is incorrect.


Snapshot Datasource: Sources tab


Snapshot Datasource: View XML scripting

The tables are automatically built within HANA DB as shown below. If a table already exists within the snapshot database, the Debug Log is empty and the table is not rebuilt.


Job Queue: BUILD XML and HANA Tables

This is an example of the scripting for building tables within HANA DB:

DO BEGIN
DECLARE EXIT HANDLER FOR SQL_ERROR_CODE 288 SELECT ::SQL_ERROR_MESSAGE FROM DUMMY;
CREATE COLUMN TABLE TGTBAR.COUNTRY_LIST(
  "Country" NVARCHAR(3) CS_STRING,
  "Name" NVARCHAR(50) CS_STRING,
  CONSTRAINT PK_COUNTRY_LIST PRIMARY KEY ("Country"));
END;

In the example above, the table (COUNTRY_LIST) is being created within schema TGTBAR - the target snapshot.

If successful, the newly created tables display within HANA DB for this schema (TGTBAR).

Note

Make sure to always Refresh the schema after running the code build so that these new items become visible.
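If the newly built tables still do not appear after a Refresh, a quick check against the HANA system views confirms whether they were created (TGTBAR is the example schema from above):

-- Confirm the snapshot tables exist in the target snapshot schema.
SELECT TABLE_NAME FROM SYS.TABLES WHERE SCHEMA_NAME = 'TGTBAR' ORDER BY TABLE_NAME;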


HANA DB: BUILD Tables within Target Snapshot Schema

This is an example of the scripting for building within HANA DB where the error is due to a field missing from the table used in the view. The field TITLE does not exist in the table from which this report is built.

Resolve this by adding the field TITLE to the table; the report then generates successfully.

DO BEGIN DECLARE EXIT HANDLER FOR SQL_ERROR_CODE 322 SELECT ::SQL_ERROR_MESSAGE FROM DUMMY; 

The error message within the Steps reads: Invalid column name: "WRKHANA"."(DO statement)": line 3 col 4 (at pos 98): T.TITLE: line 4 col 11 (at pos 218)
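One possible manual fix, sketched here with an assumed table name and field length, is to add the missing column in HANA DB and then re-run the build:

-- Hypothetical example: add the missing TITLE field so the view can be created.
-- The table name and NVARCHAR length are assumptions for illustration.
ALTER TABLE WRKHANA.FBDK_VEN_T ADD (TITLE NVARCHAR(15));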

Build the System Views

When the user attempts to Build System Views for the Project / Release / Subject Area within System Views, a Job Queue initiates to create these views within HANA DB from the target system.

The Snapshot Datasource is the 'TGT' for the target system. In this case, TGTBAR is selected. The Schema in HANA DB is the target snapshot schema or TGTBAR.


System Views: Build System Views

Click the Build button to run a Job Queue similar to the one used when SQL Server is the database. This job first DROPs each View and then CREATEs it, so there are twice as many steps as there are tables for the Target Snapshot Schema.


Job Queue: Build System Views for the Subject Area

An example of this scripting is shown below:

Drop View

DO BEGIN DECLARE EXIT HANDLER FOR SQL_ERROR_CODE 321 SELECT ::SQL_ERROR_MESSAGE FROM DUMMY; EXEC 'DROP VIEW WRKHANA."""REGION_LIST_Cust"""'; END;

Create View

CREATE VIEW WRKHANA.REGION_LIST_Cust AS SELECT * FROM TGTS4H.REGION_LIST 

Scripting for Datasets within Mapping

There are no changes to the ETL XML scripting. The Target Tables are automatically generated within HANA DB, provided the delivered Database Type Translations (from SQL Server to HANA DB and within HANA DB itself) are correct. These translations are delivered to work successfully, so there is no need to modify them.


Administer: Database Types: Translation of HANA field format

Syniti methodology applies all caps for field and table naming. HANA DB allows camel-style or sentence-style lettering - zACTIVE rather than requiring all caps - ZACTIVE. The user must put double quotes (") around field names that contain either a slash (/) or sentence-style lettering in order for the table to be created without issues. An example of the scripting is listed below, as well as the resulting table in HANA DB.

If using Quest Toad, use Create Column Table so that the table runs more efficiently in HANA DB; otherwise, ignore this note.

Create Column Table BOM_STKO_T -

CREATE TABLE [BOM_STKO_T]( [zERROR_LOG] NVARCHAR(2000), [zLOADED] BIT DEFAULT 0, [zLOADDATE] DATE, [zDATA_READY] BIT DEFAULT 1, [zSTAGE_READY] BIT DEFAULT 1, [zTARGET_READY] BIT DEFAULT 1, [zCHUNK] INT DEFAULT 1, [zREMEDIATED] BIT DEFAULT 0, [zINTARGETSYS] BIT DEFAULT 0, [zDATASOURCE] NVARCHAR(50), [ZDEPLOY] NVARCHAR(10), [zCRITICAL] BIT DEFAULT 0, CONSTRAINT [PK_BOM_STKO_T] PRIMARY KEY ([zDATASOURCE], [ZDEPLOY]))

It should instead be -

CREATE COLUMN TABLE [BOM_STKO_T]( [zERROR_LOG] NVARCHAR(2000)…

An example of a HANA DB Create Table script:

CREATE COLUMN TABLE WRKHANA.CMDK_KNA1_T( ZDATASOURCE NVARCHAR(50) CS_STRING, ZDEPLOY NVARCHAR(10) CS_STRING, ZLEGACYKUNNR NVARCHAR(50) CS_STRING, MANDT NVARCHAR(3), KUNNR NVARCHAR(10), NAME1 NVARCHAR(35), ORT01 NVARCHAR(35), PSTLZ NVARCHAR(10), REGIO NVARCHAR(3), STRAS NVARCHAR(35), TELF1 NVARCHAR(16), SPRAS NVARCHAR(1), UMJAH NVARCHAR(4), UWAER NVARCHAR(5), UMSA1 DECIMAL(15,2), UPDAT NVARCHAR(8), "/SAPCEM/DBER" NVARCHAR(8), "zERROR_LOG" NVARCHAR(2000) CS_STRING, "zLOADED" INT CS_INT DEFAULT 0, "zLOADDATE" DATE, "zDATA_READY" INT CS_INT DEFAULT 1, "zSTAGE_READY" INT CS_INT DEFAULT 1, "zTARGET_READY" INT CS_INT DEFAULT 1, "zCHUNK" INT CS_INT DEFAULT 1, "zREMEDIATED" INT CS_INT DEFAULT 0, "zINTARGETSYS" INT CS_INT DEFAULT 0, ZCRITICAL INT CS_INT DEFAULT 0, CONSTRAINT PK_CMDK_KNA1_T PRIMARY KEY (ZDATASOURCE, ZDEPLOY, ZLEGACYKUNNR)) 

The resulting Build XML scripting and the table definition in HANA DB are shown below.


Mappings: BUILD XML for Dataset: Create Table for HANA DB


HANA DB: Table Definition for field with slashes

Scripting for Sources within Mapping

There are no changes to the ETL XML scripting in this effort. The DDL scripting for HANA DB uses that system's language. The Source Table is automatically generated within HANA DB, provided the delivered Database Type Translations (from SQL Server to HANA DB and within HANA DB itself) are correct.

Once the mapping effort has been completed, Build the XML for the source. The View XML page shows minor coding changes for HANA DB language.


Mappings: Source: Details: View XML

An example of scripting for HANA database is shown below:

CREATE COLUMN TABLE WRKHANA.CMDK_KNA1_ADV_ADV_S( ADDRESSID INTEGER CS_INT NOT NULL, ADDRESSLINE1 NVARCHAR(60) CS_STRING, ADDRESSLINE2 NVARCHAR(60) CS_STRING, CITY NVARCHAR(30) CS_STRING, STATEPROVINCEID INTEGER CS_INT, POSTALCODE NVARCHAR(15) CS_STRING, SPATIALLOCATION NVARCHAR(100) CS_STRING, ROWGUID INTEGER CS_INT, MODIFIEDDATE DATE CS_DATE, ZDATASOURCE NVARCHAR(50) CS_STRING, ZDEPLOY NVARCHAR(10) CS_STRING, ZLEGACYKUNNR NVARCHAR(50) CS_STRING, "zMANDT" NVARCHAR(3), "zKUNNR" NVARCHAR(10), "zNAME1" NVARCHAR(35), "zXrefNAME1" NVARCHAR(100) CS_STRING, "zORT01" NVARCHAR(35), "zPSTLZ" NVARCHAR(10), "zREGIO" NVARCHAR(3), "zXrefREGIO" NVARCHAR(100) CS_STRING, "zSTRAS" NVARCHAR(35), "zTELF1" NVARCHAR(16), "zSPRAS" NVARCHAR(1), "zUMJAH" NVARCHAR(4), "zUWAER" NVARCHAR(5), "zXrefUWAER" NVARCHAR(100) CS_STRING, "zUMSA1" DECIMAL(15,2), "zUPDAT" NVARCHAR(8), "z/SAPCEM/DBER" NVARCHAR(8), ZCRITICAL INTEGER CS_INT, ZACTIVE INTEGER CS_INT, "zINTEGER" INTEGER CS_INT, "zDUPLICATE_SOURCE" NVARCHAR(50) CS_STRING, "zDUPLICATE_OF" NVARCHAR(50) CS_STRING, "zERROR_LOG" NVARCHAR(2000) CS_STRING, "zINACTIVE_REASON" NVARCHAR(2000) CS_STRING, CONSTRAINT PK_CMDK_KNA1_ADV_ADV_S PRIMARY KEY (ADDRESSID)) 

HANA DB: Table Definition for field with camel case

Creating Target Reports

The Reporting process in this scenario builds the views within the Working Database (HANA DB), and the reporting data is stored in the REPORT db within SQL Server where Migration is run. Most of the standard reports auto-generate and build views as expected. Note that in the case of the Not IN reports, the Target system (ECC, S4H) Data Dictionary tables (DD tables) need to be stored and refreshed within the Target Datasource so that these Check table reports can build the appropriate joins. Should any of the Not IN reports fail to build, the issue is with the DD tables of the Target Datasource. The user can manually correct these views for use.

The Counts and Recon reports build within the database.

ALL ERRORS Report

  • The report is registered, but not created in the database until the Report Refresh process runs.

  • The report re-generates each time an Error report is changed (Added, Deleted, or Marked In-Active).

    Note

    Marking a report In-Active requires a report Refresh to initiate the re-generation of the All Errors Report.

HANA DB is similar to Oracle, but in the case of granting access, HANA DB does not need GRANT ACCESS scripting.

Each user account for HANA DB should already have full access to all Source, Target, Report, and Working databases, so granting access permissions should not be necessary.
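If a particular account does turn out to lack access, a schema-level grant of this general form can be issued manually in HANA DB (the user and schema names below are placeholders):

-- Placeholder example only; normally not required for HANA DB accounts.
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA WRKHANA TO MIGRATE_USER;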


Job Queue: Target Reports build

Note

For any step that shows a status of ERROR, click the Edit icon to view the error messaging. The SQL Parameter provides a Value containing the scripting of that report. The user should attempt to manually build the same report within HANA DB to view the extended error messaging from HANA DB for the issue.

An example of the scripting for Target Reports in HANA DB is shown below:

DO BEGIN DECLARE EXIT HANDLER FOR SQL_ERROR_CODE 322 SELECT ::SQL_ERROR_MESSAGE FROM DUMMY; EXEC 'DO BEGIN DECLARE EXIT HANDLER FOR SQL_ERROR_CODE 322 SELECT ::SQL_ERROR_MESSAGE FROM DUMMY; EXEC ''CREATE VIEW WRKHANA."FBDK_VEN_T_VENDOR_ISNULL" AS SELECT T.ZDATASOURCE, T.ZDEPLOY, T.ZLEGACYVENDORID, T."VENDOR" FROM FBDK_VEN_T T WHERE IFNULL(T.VENDOR,'''' '''') = '''' ''''''; END;'; END;
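The nested EXEC calls and doubled single quotes above are escaping layers; unwrapped, the statement that ultimately runs is approximately:

CREATE VIEW WRKHANA."FBDK_VEN_T_VENDOR_ISNULL" AS
SELECT T.ZDATASOURCE, T.ZDEPLOY, T.ZLEGACYVENDORID, T."VENDOR"
FROM FBDK_VEN_T T
WHERE IFNULL(T.VENDOR, ' ') = ' ';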

Value Mapping Configuration for HANA

The process allows users to configure the list of Check Tables for use in Value Mapping.

Check Tables are used to validate and control values entered into fields, primarily via SAP ETL. Within the Mapping exercise, target fields are assigned a mapping type of XREF (Cross reference), and the assigned Check Table and its values are added to Value Mapping using table XTVALUEMAP within the Syniti Construct SQL Server database. Running the conversion within the ETL inserts into the MIGRATE.dbo.XTVALUEMAP table and updates the SRCCONSTRUCT.dbo.XTVALUEMAP table.

The team maps from source values to the check table target values in a Value Mapping exercise to produce valid translations. In some cases, however, the target values do not exist within the Target ETL, or the Target Check table does not yet exist within the Target ETL. This section provides the user with the ability to create Check Tables and add fields within Check Tables for use in the Value Mapping process.

Once the Target ETL becomes synced with the values in the Check Table, these manually configured values change status to system configured values. Users may Insert, Update, Delete, or mark inactive the values in the Check Tables.

In the case where SQL Server is not the working database (i.e., Oracle or HANA DB), a Snapshot Datasource must be built within the Migration SQL Server db for this process to work. Also, the local Snapshot must contain the required DD Tables, including DD02L, DD02T, DD03L, DD04T, DD05P, and DD08L, as well as any of the check tables used in Enrichment. Refer to section Administer > Value Mapping Config for details of this process and its use.
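A quick way to verify that the required DD tables are present in the local SQL Server snapshot database is a check like the following (the database name SRCDDSNAP is a placeholder for the actual snapshot db):

-- SQL Server check that the DD tables needed for Value Mapping Config exist in the snapshot db.
SELECT name FROM SRCDDSNAP.sys.tables
WHERE name IN ('DD02L', 'DD02T', 'DD03L', 'DD04T', 'DD05P', 'DD08L');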

